Learning from Millions of 3D Scans for Large-scale 3D Face Recognition
Deep networks trained on millions of facial images are believed to be closely
approaching human-level performance in face recognition. However, open-world
face recognition remains a challenge. Although 3D face recognition has
an inherent edge over its 2D counterpart, it has not benefited from recent
developments in deep learning because large training and test datasets are
unavailable. Recognition accuracies have already saturated on
existing 3D face datasets due to their small gallery sizes. Unlike 2D
photographs, 3D facial scans cannot be sourced from the web, which creates a
bottleneck in the development of deep 3D face recognition networks and
datasets. Against this backdrop, we propose a method for generating a large corpus
of labeled 3D face identities and their multiple instances for training and a
protocol for merging the most challenging existing 3D datasets for testing. We
also propose the first deep CNN model designed specifically for 3D face
recognition, trained on 3.1 million 3D facial scans of 100K identities. Our
test dataset comprises 1,853 identities with a single 3D scan in the gallery
and another 31K scans as probes, which is several orders of magnitude larger
than existing ones. Without fine tuning on this dataset, our network already
outperforms state of the art face recognition by over 10%. We fine tune our
network on the gallery set to perform end-to-end large scale 3D face
recognition which further improves accuracy. Finally, we show the efficacy of
our method for the open world face recognition problem.Comment: 11 page
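The gallery/probe protocol described above (a single enrolled scan per identity, every probe matched against the whole gallery) can be sketched as rank-1 identification over learned embeddings. This is an illustrative sketch, not the paper's implementation: `rank1_identify` and the cosine-similarity matcher are assumptions, and the toy 2-D vectors stand in for real network embeddings.

```python
import numpy as np

def rank1_identify(gallery, probes):
    """Rank-1 identification: match each probe embedding to the closest
    gallery embedding by cosine similarity (one embedding per identity)."""
    # L2-normalise rows so a plain dot product equals cosine similarity
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    p = probes / np.linalg.norm(probes, axis=1, keepdims=True)
    # similarity matrix is (num_probes, num_gallery); argmax picks the
    # predicted gallery identity for each probe
    return np.argmax(p @ g.T, axis=1)

# toy example: 3 gallery identities, 2 probes (stand-ins for CNN embeddings)
gallery = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
probes = np.array([[0.9, 0.1], [0.1, 0.9]])
print(rank1_identify(probes=probes, gallery=gallery))  # [0 1]
```

At the scale reported in the abstract (1,853 gallery identities, 31K probes), this nearest-neighbour search is a single matrix multiply over the embedding matrices.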
Defense against Universal Adversarial Perturbations
Recent advances in Deep Learning show the existence of image-agnostic,
quasi-imperceptible perturbations that, when applied to `any' image, can fool a
state-of-the-art network classifier into changing its prediction about the image
label. These `Universal Adversarial Perturbations' pose a serious threat to the
success of Deep Learning in practice. We present the first dedicated framework
to effectively defend the networks against such perturbations. Our approach
learns a Perturbation Rectifying Network (PRN) as `pre-input' layers to a
targeted model, such that the targeted model needs no modification. The PRN is
learned from real and synthetic image-agnostic perturbations, where an
efficient method to compute the latter is also proposed. A perturbation
detector is separately trained on the Discrete Cosine Transform of the
input-output difference of the PRN. A query image is first passed through the
PRN and verified by the detector. If a perturbation is detected, the output of
the PRN is used for label prediction instead of the actual image. A rigorous
evaluation shows that our framework can defend network classifiers against
unseen adversarial perturbations in real-world scenarios with up to a 97.5%
success rate. The PRN also generalizes well: training for one
targeted network defends another network with a comparable success rate.
Comment: Accepted in IEEE CVPR 201
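The inference-time pipeline described above (rectify with the PRN, detect on the DCT of the input-output difference, then classify the rectified image only if a perturbation is flagged) can be sketched as follows. This is a minimal illustration of the control flow, not the paper's implementation: `prn`, `detector`, and `classifier` are placeholder callables, and the threshold detector and toy image below are assumptions.

```python
import numpy as np

def dct2(x):
    """Orthonormal 2-D DCT-II, the feature space the detector operates on."""
    n = x.shape[0]
    k = np.arange(n)[:, None]
    c = np.cos(np.pi * (np.arange(n)[None, :] + 0.5) * k / n) * np.sqrt(2.0 / n)
    c[0] /= np.sqrt(2.0)  # scale the DC row for orthonormality
    return c @ x @ c.T

def defend(image, prn, detector, classifier):
    """PRN defense at query time: rectify, detect, then classify either the
    rectified image or the original, depending on the detector's verdict."""
    rectified = prn(image)
    # the detector sees the DCT of the PRN's input-output difference
    perturbed = detector(dct2(image - rectified))
    return classifier(rectified if perturbed else image)

# toy stand-ins for illustration only (not the trained PRN/detector)
prn = lambda x: np.clip(x - 0.1, 0.0, 1.0)   # pretend rectifier
detector = lambda f: np.abs(f).sum() > 1.0   # energy threshold in DCT space
classifier = lambda x: int(x.mean() > 0.5)   # dummy binary classifier

print(defend(np.full((4, 4), 0.8), prn, detector, classifier))
```

Note that the targeted classifier is untouched: the PRN and detector sit entirely in front of it, which is what lets the defense be attached to an existing model without retraining it.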